3 research outputs found

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Spatial Sound in a 3D Virtual Environment: All Bark and No Bite?

    Although the focus of Virtual Reality (VR) lies predominantly on the visual world, acoustic components enhance the functionality of a 3D environment. To study the interaction between visual and auditory modalities in a 3D environment, we investigated the effect of auditory cues on visual search in 3D virtual environments containing both visual and auditory noise. In an experiment, we asked participants to detect visual targets in a 360° video in conditions with and without environmental noise. Auditory cues indicating the target location were either absent or presented as simple stereo or binaural audio, both of which assist sound localization. To investigate the efficacy of these cues in distracting environments, we measured participant performance using a VR headset with an eye tracker. We found that the binaural cue outperformed both stereo and no auditory cues in terms of target detection, irrespective of the environmental noise. We used two eye-movement measures and two physiological measures to evaluate task dynamics and mental effort. We found that the absence of a cue increased target search duration and target search path, measured as time to fixation and gaze trajectory length, respectively. Our physiological measures, blink rate and pupil size, showed no differences between the environmental-noise and cue conditions. Overall, our study provides evidence for the utility of binaural audio in a realistic, noisy, virtual environment for performing a target detection task, which is a crucial part of everyday behaviour: finding someone in a crowd.
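
    The two eye-movement measures reported above reduce to simple computations over the raw gaze stream. As a rough illustration, the Python sketch below derives time to fixation and gaze trajectory length from a list of gaze samples; the data layout, the sampling rate, and the first-sample-inside-the-target criterion are illustrative assumptions, not details taken from the study.

        # Illustrative sketch (assumed data layout, not the study's pipeline):
        # derive time to fixation and gaze trajectory length from gaze samples.
        import math

        def search_measures(samples, target_region, sample_rate_hz=120):
            """samples: time-ordered list of (x, y) gaze positions in degrees
            of visual angle; target_region: (x, y, radius) of the target."""
            tx, ty, r = target_region
            path_length = 0.0        # gaze trajectory length: summed point-to-point distance
            time_to_fixation = None  # time of first sample landing inside the target region
            for i, (x, y) in enumerate(samples):
                if i > 0:
                    px, py = samples[i - 1]
                    path_length += math.hypot(x - px, y - py)
                if time_to_fixation is None and math.hypot(x - tx, y - ty) <= r:
                    time_to_fixation = i / sample_rate_hz  # seconds from trial onset
            return time_to_fixation, path_length

    On this reading, a missing auditory cue lengthens both quantities: the gaze wanders over a longer path before the first sample falls inside the target region.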

    AUTOtech.agil: architecture and technologies for orchestrating automotive agility

    Future mobility will be electrified, connected and automated. This opens completely new possibilities for mobility concepts that have the chance to improve not only the quality of life but also road safety for everyone. To achieve this, a transformation of the transportation system as we know it today is necessary. The UNICARagil project, which ran from 2018 to 2023, produced architectures for driverless vehicles that were demonstrated in four full-scale automated vehicle prototypes for different applications. The AUTOtech.agil project builds upon these results and extends the system boundaries from the vehicles to the whole intelligent transport system (ITS), comprising, e.g., roadside units, coordinating instances and cloud backends. The consortium was extended mainly by industry partners, including OEMs and tier 1 suppliers, with the goal of synchronizing the concepts developed in the university-driven UNICARagil project with the automotive industry. Three significant use cases of future mobility motivate the consortium to develop a vision for a Cooperative Intelligent Transport System (C-ITS) in which entities are highly connected and continually learning. The proposed software ecosystem is the foundation for the complex software engineering task required to realize such a system. Embedded in this ecosystem, a modular kit of robust, service-oriented modules along the effect chain of vehicle automation is developed, together with cooperative and collective functions. The modules shall be deployed on a service-oriented E/E platform, for which AUTOtech.agil develops standardized interfaces and development tools. Additionally, the project focuses on the continuous consideration of uncertainty, expressed as quality vectors. A consistent safety and security concept shall pave the way for the homologation of the researched ITS.
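
    The quality vectors mentioned above attach explicit uncertainty information to each module output so that downstream consumers can decide how much to trust the data. The Python sketch below illustrates the general idea; all names, fields and thresholds are assumptions for illustration, not the project's actual interface definitions.

        # Illustrative sketch of a service output annotated with a quality
        # vector; every name and field here is a hypothetical placeholder.
        from dataclasses import dataclass, field
        import time

        @dataclass
        class QualityVector:
            latency_s: float      # age of the data when published
            confidence: float     # self-assessed reliability in [0, 1]
            integrity_ok: bool    # passed plausibility/security checks

        @dataclass
        class ObjectListMessage:
            objects: list         # e.g., tracked road users from a perception service
            quality: QualityVector
            stamp: float = field(default_factory=time.time)

        def fuse(messages):
            """Toy consumer: keep only inputs whose quality vector signals trust."""
            return [m for m in messages
                    if m.quality.integrity_ok and m.quality.confidence >= 0.5]

    The design point is that uncertainty travels with the data through the service-oriented architecture instead of being assessed ad hoc by each consumer.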